Technology is advancing at a remarkable pace, and its footprint is increasingly visible across the healthcare domain. The ambivalence toward technology within the physician community is therefore understandable. In contrast to millennials' growing reliance on technological applications, the baby boomers' distaste for health information tools is well known. Amid all of this, one piece has yet to catch up with the ever-evolving high-tech revolution: the unanswered question of the medico-legal risks looming over healthcare at large. Medicine, by its peculiar nature, is considered among the professions most vulnerable to litigation.
Although the number of successful lawsuits fluctuates from year to year, the trend appears to be upward. Why patients decide to file lawsuits, and how physicians breach the standard of care, varies; but in the bulk of cases, if one digs deeper, one can readily find, beyond the deviation from the standard of care itself, a broken link in the bond between the doctor and the patient.
Trust, knowledge, concern, and loyalty are the basic elements of every healthy tie between physician and patient. The relationship is a consensual bond built on vulnerability and mutual trust. Indeed, it is one of the most moving and meaningful experiences human beings share. Yet the rapport that grows out of such a connection is not always perfect.
Traditional scholars have grouped the relationship between patient and physician into three categories: guidance-cooperation, active-passive, and mutual participation. Although all three have been recognized throughout history, the last now dominates today's healthcare landscape. It reflects citizens' growing access to boundless information and expertise, as well as the rise of consumerism. Other factors can also disrupt the relationship, including time constraints, language barriers, lack of transparency, cultural norms, and personal attitudes.
For simplicity's sake, any instrument that intervenes between a patient and a physician should ideally be in concordance with the unique bond established between the two parties. If it is not, it can break one or more elements of the relationship. Such disrupting factors vary, and new ones keep appearing. One potential disruptor of our time is the use of unregulated artificial intelligence algorithms in medical applications and healthcare logistics. According to a recent article published in the JAMA Network, physicians face imminent liability threats when they use unregulated and only partially validated artificial intelligence (AI) technologies.
Technology is meant to ease some of the burden of physicians' responsibilities and make their work more efficient and precise. Nonetheless, as industry business requirements change, algorithms need to be updated and synchronized periodically. This certainly applies to the healthcare industry, especially with regard to keeping pace with the standard of medical care for a specific time, place, and medical practice.
The practice of medicine is constantly changing, and so are prevailing clinical care, technology, and social norms. This inherent, multifaceted volatility is what so often sets the physician's commitments and the patient's expectations at odds.
To clarify further, let's go over some definitions.
Medical practice is the professional function of an individual recognized by his or her peers as an expert in the field of medicine. Its primary mission, in keeping with the Hippocratic Oath, is to treat often, cure sometimes, and comfort always. The pledge applies to the care of a person who consents to take part as the patient through the establishment of an alliance. A clinical relationship built on trust, through the full agreement of both parties within the context of knowledge, skill, and ethics, remains the ultimate goal.
In contrast to other skills, clinical decision-making remains by far the most prone to variation, from the doctor-patient interaction to the unique needs of the individual and the societal factors in between. Defining the standard of care therefore does not reduce to the kind of fixed benchmark familiar in other industries. The ideal of medical care shifts with time, place, person, resources, accepted societal norms, and economic determinants. The standard of medical care is what physicians of similar training, in similar medical communities and under comparable circumstances, would offer; a breach of that standard is the basis of an alleged malpractice claim.
Deviation from the standard of care is simply the outcome of a departure from common clinical judgment. In the modern sense, almost every physician relies on some form of instrument to reach a careful medical decision. Conventionally, one such instrument would be the stethoscope, which enables the physician to listen to heart sounds. Today's scientific advances have brought far more sophisticated tools into the doctor's practice. Needless to say, as with the stethoscope, if a physician cannot use AI technology according to its specifications, the potential fallout is flawed decisions on the doctor's part, and with them exposure to medico-legal proceedings.
Artificial intelligence, machine learning (ML) or deep learning (DL)
In computer science, artificial intelligence (AI), also referred to as machine intelligence, describes intelligence demonstrated by machines, in contrast with the natural human intelligence we are all familiar with. Leading AI textbooks define the discipline as the study of “intelligent agents”.
For an AI to function, it must first learn. That learning happens through machine learning, which gives the system the ability to learn from its human counterparts through extensive data mining, drawing on sensors, metadata, and algorithmic protocols, and by observing humans, and to improve its own performance without being explicitly programmed.
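As a rough illustration of that point, the toy sketch below (synthetic data, hypothetical feature names, scikit-learn assumed) shows a model deriving its decision rule from labeled examples rather than from explicitly programmed diagnostic criteria.

```python
# Minimal sketch: the decision rule is learned from examples, not hand-coded.
# All data and feature meanings here are synthetic illustrations.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # e.g. age, blood pressure, lab value
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "condition present" labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```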
Deep learning (DL), also a subset of artificial intelligence, provides networks capable of learning in an “unsupervised” manner from data that is unstructured or unlabeled. Also known as deep neural learning or deep neural networks, it is often portrayed as giving AI in medicine and healthcare a self-sufficient, almost limitless capability, which makes human contributions to its development all the more consequential for ensuring ethically and legally compliant precedents.
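To make the "unsupervised, unlabeled" distinction concrete, here is a minimal sketch (assuming PyTorch is available; the data are synthetic stand-ins) in which an autoencoder learns a compressed representation of records that carry no diagnostic labels at all.

```python
# Minimal sketch of unsupervised deep learning: an autoencoder reconstructs
# unlabeled records through a low-dimensional bottleneck. Data are synthetic.
import torch
from torch import nn

x = torch.randn(256, 20)  # 256 unlabeled records, 20 features each
model = nn.Sequential(nn.Linear(20, 4), nn.ReLU(), nn.Linear(4, 20))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(200):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)  # reconstruct the input itself
    loss.backward()
    optimizer.step()
print("final reconstruction error:", float(loss))
```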
It has recently become a common conviction among some technocrats, who place liberal faith in AI's capacities, that it will ultimately diagnose diseases and offer the right treatment options even more flawlessly than humans. They are relentlessly sure that machines will be able to learn, work up a differential diagnosis, and choose the best treatment for a particular patient without physician participation.
It is utterly premature and radical to accept such a premise. Still, giving it the benefit of the doubt, let us speculate that such a scenario is probable: one in which the doctor-patient liaison is, in fact, a machine-patient relationship or a corporate-patient affiliation. Even then, reaching that point would require a transition period during which physicians must periodically intervene. With healthcare rushing toward robotic medicine at its current pace, human intervention must be regarded as essential to the medical community's continued influence and to safety. Failing that, a vacuum will form, drawing in the very factors that can swing the standard of medical care, adversely affecting clinical judgment and exposing the physician to legal consequences.
So, what if the AI propagates recommendations without the capability to communicate the underlying differential diagnosis behind the selected treatment choice? Or what if the machine learning was trained on unrelated clinical scenarios, using unreliable methods, or on imprecise data sets?
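By way of contrast, a simple, transparent model can at least report which inputs pushed it toward its output. The sketch below (synthetic data, hypothetical feature names) shows one possible way such a rationale might be surfaced; an opaque algorithm offers nothing comparable.

```python
# Minimal sketch: a linear model whose per-feature contributions can be shown
# alongside its recommendation. All data and names are synthetic illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["age", "troponin", "st_elevation"]
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = (X[:, 1] + X[:, 2] > 0).astype(int)  # synthetic labels

model = LogisticRegression().fit(X, y)
patient = X[0]
risk = model.predict_proba([patient])[0, 1]
contributions = dict(zip(features, model.coef_[0] * patient))
print(f"predicted risk: {risk:.2f}; per-feature contributions: {contributions}")
```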
Generally, under the law, the physician is liable if she or he does not adhere to the standard of care and injury results directly from that deviation. With the application of AI, one can foresee many potential avenues open for legal remedy. (Fig. 1)
Because of its multifaceted nature, deviation from the standard of care as it applies to artificial intelligence does not stop there. The continual shifts in social expectations, science, technology, and the sociopolitical landscape around medical practice, along with the ever-changing socioeconomic healthcare panorama, demand that algorithms be updated and revalidated in parallel with those changes. Yet given the medical community's current skepticism and its disengagement from the technology domain, such a task is next to impossible.
The repercussion is that the physician's liability is left at the mercy of the tech industry and algorithms written by non-physicians. Until the public is ready to place its faith in automation to stay healthy without human empathy, or to fully trust an empathic transference to machines, the potential for legal exposure will remain high and unpredictable.
Logically, we can all agree on using AI as a distinct kind of instrument in medicine. But there must be a point of concession that likewise secures safety protocols throughout the maturation period. For that reason, any use of AI beyond its functional requirements must be diligently validated.
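One illustration of what such diligent validation could look like in practice: before a vendor-supplied model touches patient care, it is checked against the practice's own labeled cases and a locally agreed acceptance bar. The thresholds and names below are assumptions for the sketch, not an established standard.

```python
# Minimal sketch of local validation before clinical use. Thresholds are
# illustrative assumptions a practice would set for itself.
from sklearn.metrics import confusion_matrix

ACCEPT_SENSITIVITY = 0.90
ACCEPT_SPECIFICITY = 0.85

def validate_locally(model, X_local, y_local):
    """Accept the model only if it meets the practice's bar on its own data."""
    tn, fp, fn, tp = confusion_matrix(y_local, model.predict(X_local)).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity >= ACCEPT_SENSITIVITY and specificity >= ACCEPT_SPECIFICITY
```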
Truly, without well-defined operational requirements we may be opening a can of worms. Machine learning and artificial intelligence are tools that help physicians refine the history and physical exam, develop a sharper differential diagnostic workup, order appropriate tests, and enhance patient collaboration. Any contradiction between the physician's recommendation and that of the machine must be delineated and made transparent to the patient.
Establishing proper expectations is imperative, especially regarding possible complications and treatment failures. Those expectations ought to be clear to every patient and physician alike. It must, as a result, be possible to sort out whether a deviation from the standard stems from technological failure or from plain physician negligence.
How physicians plan to override an AI recommendation carries unique legal and ethical challenges, all the more so if the algorithms are not disclosed to the physician in advance. When complications occur, how that process of clinical decision-making is perceived by patients, peers, or the legal system is a delicate issue to tackle.
Legal liability and how to prevent it
Indeed, an overwhelming number of open-ended questions must be answered before the healthcare community can place its faith in a machine-learning-capable mechanism such as “Doctor Alexa”.
The decision to override the power of AI is a double-edged sword: we may be placing more trust in robots than we are prepared to contemplate, along with the consequences that come with it.
But whom does the law trust: physicians or the tech industry? Slow but sure steps in the right direction are the key.
The medical community's attitude must change. The necessary adjustment is to reclaim ownership of a realm physicians have been losing to other industries for the past decade. Let us start by simplifying the clinical judgment process using DL and AI while figuring out how to harness machine learning to shadow each physician independently. That way we can mold the technology to fit each physician's personal customs and style of practice, and then unleash its ability to use shared patient data to improve outcomes through feedback and constructive criticism, after first outlining the possible pros and cons. That said, this is not reality today. There is significant blur and discrepancy within the current systems of applied science and medicine.
We must presume there is a long road ahead before the medical community can reasonably place confidence in the full spectrum of available solutions. Physician ownership of the healthcare domain is indispensable. The science of creating functional requirements and validating algorithms must become part of the medical curriculum. Above all, algorithms should deliver as intended for tactical medical care, free of any strategic undertaking to pivot corporate interests toward financial gain.
Empower physicians by guaranteeing the adaptability of deep learning algorithms to individual circumstances, designing them to act as assistants to the physician rather than as independent providers. AI must recognize the reference point for the standard of care for a specific scenario, time, place, and person. The physician, together with the patient, ought to be able to redefine every case and to hold the legal, ethical, and technical power to mutually override the algorithm's decision in favor of a personalized approach.
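A minimal sketch of such a human-in-the-loop arrangement, with all names hypothetical: the algorithm's output is treated as a suggestion, the physician's choice is final, and any override must carry a documented rationale that is preserved for later review.

```python
# Minimal sketch: AI output is advisory; the physician decides, and overrides
# are documented. All class and function names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    ai_recommendation: str
    final_decision: str
    overridden: bool
    rationale: str
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def finalize(ai_recommendation: str, physician_choice: str, rationale: str = "") -> Decision:
    overridden = physician_choice != ai_recommendation
    if overridden and not rationale:
        raise ValueError("An override must carry a documented rationale.")
    return Decision(ai_recommendation, physician_choice, overridden, rationale)

# Example: the physician departs from the algorithm and documents why.
record = finalize("start drug A", "start drug B", rationale="contraindication noted on exam")
print(record)
```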
It is always wiser, and safer, to limit the scope of AI to focused, smaller applications at a time until the science supporting DL has matured enough to accommodate every case individually. We must keep algorithms out of inexpert hands, limiting deep learning tools to individual physicians or medical groups rather than making them universally available to everyone. We should not let bureaucratic processes bargain away the quality and functionality of the technology. Enable AI to compare against benchmarks and make recommendations, because clinical advice based solely on predetermined protocols is a road to pernicious healthcare.